Being a scientist is not simply knowing facts; it is not being a Google on legs. Being a scientist implies an approach to knowledge, one in which supporting evidence and data are essential. Data are no good unless the methods of obtaining those data are sound. Yet our most prestigious journals featuring primary research (as opposed to reviews) show less and less concern about presenting methods as a prominent and essential part of a paper.
Downgrading the methods section
Often journals use combinations of devices to de-emphasize methods and get them out of the way. The most common is to put them in small, nearly microscopic print. Another (and equally symbolic) device is to place the methods at the end of a paper, as an afterthought, when the reader has already been told what everything means. The journal Current Biology goes one step further: It dispenses with the introduction and starts off with a section labeled “Results and Discussion.” How can a discussion be maximally intelligible and illuminating without background on the problem being investigated and on the methods for conducting the investigation? A further distancing of the reader from the methods comes from putting technical details on the Web. Some journals get rid of the methods section altogether, splitting it between footnotes and figure legends, with a few essentials for comprehension left in the text; to find out what was actually done, the reader may have to search out and collate information from all three places. To make matters worse, a figure legend describing methods may span as much as three columns. Presumably column format is often used in journal articles because it is easier to read. Evidently some editors do not mind overmuch if information on methods is not easy to read.
Given these obstacles, many readers probably do not devote as much attention to the methods as to other aspects of the paper. A more serious problem is that refereeing may be affected. In some cases of fraud, there have been complaints that authors have not provided sufficient methodological details for others to replicate the work (Adam and Knight 2002). The end result is not so different from what already happens in many journals without any intended fraud: It becomes hard to find the information necessary to make the technical (as opposed to editorial) comments that are supposed to be critical to judging the acceptability of a paper (Adam and Knight 2002).
Titles: Assertion versus description
Titles are a tone-setting part of a creation. Debussy put the titles of his preludes at the end of the score, encouraging musicians and listeners to form their own impressions and react in their own way before he revealed his own source of inspiration. This format might be unsuitable for scientific literature, and yet the other extreme of making the title into a declaratory sentence that tells the reader what the material adds up to is equally unsuitable. Nowadays titles often take the form of a simple transitive sentence, such as “Substance X Stimulates/Controls/Inhibits the Appearance of Phenomenon Y.” The results may be striking, but the interpretation promoted in the title may not be the only possible explanation of the findings. Nonetheless, in many articles the discussion has taken over the title, displacing the methods and the results.
This is not to argue that declaratory titles should never be used. Let us at all costs avoid dull uniformity! But sometimes a title conveying what was manipulated and measured (in the genre of “The Effect of Variable x on Variable y”) may be preferable; it takes a more authentically scientific approach than telling readers what the answer is while, by downgrading the methods, making it harder to see whether that answer is indeed so.
The disease of the declaratory has now spread from titles of papers to the headings of subsections (e.g., in the journals Cell and Neuron) and even to figure legends. The reader is told what to see in the figure. But a genuinely scientific evaluation of a paper or an inspection of a figure should be more like that type of assignment in which students are presented with methods and results sections and then asked to write the introduction and discussion.
Popular versus professional
Science and Nature, and to a lesser extent some other journals, often report findings of great importance. They are also immensely successful publications. Obviously they are doing something right. But the niche they dominate is not the most purely scientific. To be sure, exciting new findings are reported, but that is often done in a way that can be picked up easily by the popular press—an amalgam of a scientific report with a claim to priority, an extended abstract, and a press release. Along with this amalgam goes the marginalization of methods, and on occasion a speed of acceptance that raises doubts about the thoroughness of reviewing.
The downside of the success of Nature and Science in so effectively filling this needed niche is that too much attention is paid to articles published in these journals. Granting agencies, deans, presidents, and assessment committees weigh such publications too highly, as is well known to the editors of these journals. For instance, the editor of Science has expressed reservations about the winner-take-all model attached to publications in that journal; he cautioned against taking such publications too seriously (Kennedy 2002).
Why do evaluation committees not do the same? Perhaps it is the desire for a simple measure of something that is inherently complex. Professional scientists should put more weight on fuller accounts that include the methods and data as an essential part of a paper. These should be presented in sufficient detail to be worthy of going into the dusty and distinguished archives of scholarship, regardless of any momentary efflorescence in brighter but more ephemeral media.
Meanwhile, like my colleagues, I will continue to read and to enjoy Science and Nature for what they are; but as a professional I will get more pleasure from and reserve more admiration for a different type of paper—and will long for a journal in which separate and completely self-sufficient sections of methods and results appear in large print, and in which, if so required for economy, the introduction and discussion are relegated to reduced text size!
References cited
Notes
[1] Nicholas Mrosovsky (e-mail: mro@zoo.utoronto.ca) is an emeritus professor in the Departments of Zoology, Psychology, and Physiology at the University of Toronto, Canada. His research centers on nonphotic resetting of circadian clocks.